Second-order Taylor expansion
- Asia > China > Guangdong Province > Shenzhen (0.04)
- North America > United States > Virginia (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
To obtain the upper bound of Equation (10), we consider the second-order Taylor expansions of $L\left(c(z_1), \frac{p_1+p_2}{2}\right)$ and $L\left(c(z_2), \frac{p_1+p_2}{2}\right)$.
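The displayed expansion itself is truncated in the source; a generic second-order Taylor expansion consistent with the surrounding text, taken in the second argument around $p_1$ (the expansion point is an assumption), would read:

```latex
L\!\left(c(z_1), \tfrac{p_1+p_2}{2}\right)
\approx L\big(c(z_1), p_1\big)
+ \left.\frac{\partial L}{\partial p}\right|_{p=p_1} \frac{p_2-p_1}{2}
+ \frac{1}{2}\left.\frac{\partial^2 L}{\partial p^2}\right|_{p=p_1}
  \left(\frac{p_2-p_1}{2}\right)^{\!2}
```

The analogous expansion with $z_2$ in place of $z_1$ gives the second term the bound combines.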
Medical Expenditure Dataset (MEPS): This is used to predict whether a person has high utilization or not. For tabular datasets, we use a batch size of 64 and train the model for a maximum of 20 epochs.

Adversarial Training (Adversarial): Consider a classification model $f(x) = c(g(x))$, where $g(x)$ is the encoder and $c(\cdot)$ is the classification head. For adversarial training, an additional adversarial classifier $c_{adv}(\cdot)$ is constructed. The classification head $c(\cdot)$ and the adversarial classifier $c_{adv}(\cdot)$ are trained simultaneously.
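The simultaneous training above can be sketched with a toy linear encoder and logistic heads. This is a minimal sketch, not the paper's implementation: the protected attribute `a` that the adversary predicts, the gradient-reversal objective for the encoder, and all dimensions and hyperparameters are illustrative assumptions, since the excerpt does not specify what $c_{adv}(\cdot)$ is trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular batch: 64 samples (the batch size above), 10 features.
X = rng.normal(size=(64, 10))
y = rng.integers(0, 2, size=64)   # task label (e.g. high utilization)
a = rng.integers(0, 2, size=64)   # hypothetical protected attribute

# Linear encoder g(x) and two logistic heads c(.) and c_adv(.).
W_g = rng.normal(scale=0.1, size=(10, 4))
w_c = rng.normal(scale=0.1, size=4)
w_adv = rng.normal(scale=0.1, size=4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t, eps=1e-9):
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

lam, lr = 0.5, 0.1
for _ in range(200):
    Z = X @ W_g                   # encoder output g(x)
    p_task = sigmoid(Z @ w_c)     # c(g(x))
    p_adv = sigmoid(Z @ w_adv)    # c_adv(g(x))

    # Gradients of the two mean binary cross-entropy losses w.r.t. logits.
    d_task = (p_task - y) / len(y)
    d_adv = (p_adv - a) / len(a)

    # Both heads minimize their own loss, trained in the same step.
    w_c -= lr * (Z.T @ d_task)
    w_adv -= lr * (Z.T @ d_adv)

    # Encoder minimizes the task loss while MAXIMIZING the adversary's
    # loss (gradient reversal), so g(x) hides the attribute from c_adv.
    grad_enc = X.T @ np.outer(d_task, w_c) - lam * X.T @ np.outer(d_adv, w_adv)
    W_g -= lr * grad_enc

task_loss = bce(sigmoid((X @ W_g) @ w_c), y)
adv_loss = bce(sigmoid((X @ W_g) @ w_adv), a)
print(task_loss, adv_loss)
```

The design point is the sign flip in `grad_enc`: the heads descend their own losses, while the encoder ascends the adversary's loss, which is what "trained simultaneously" amounts to in adversarial setups of this shape.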
Understanding Generalization in the Interpolation Regime using the Rate Function
Masegosa, Andrés R., Ortega, Luis A.
In this paper, we present a novel characterization of the smoothness of a model based on basic principles of Large Deviation Theory. In contrast to prior work, where the smoothness of a model is normally characterized by a single real value (e.g., the weights' norm), we show that smoothness can be described by a simple real-valued function. Based on this concept of smoothness, we propose a unifying theoretical explanation of why some interpolators generalize remarkably well and why a wide range of modern learning techniques (i.e., stochastic gradient descent, $\ell_2$-norm regularization, data augmentation, invariant architectures, and overparameterization) are able to find them. The emergent conclusion is that all these methods provide complementary procedures that bias the optimizer toward smoother interpolators, which, according to this theoretical analysis, are the ones with better generalization error.
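As an illustration of the Large Deviation Theory machinery the abstract builds on (not the paper's model-specific smoothness function), the rate function of a Bernoulli(1/2) sample mean can be computed numerically as the Legendre-Fenchel transform of the log-moment-generating function, and matches the closed-form KL divergence to 1/2:

```python
import numpy as np

def log_mgf_bernoulli(lam, p=0.5):
    # log E[exp(lam * Z)] for Z ~ Bernoulli(p)
    return np.log(1 - p + p * np.exp(lam))

def rate_function(x, lambdas=np.linspace(-20, 20, 20001)):
    # Cramer rate function via Legendre-Fenchel transform:
    # I(x) = sup_lam [ lam * x - log M(lam) ], taken over a grid here.
    return np.max(lambdas * x - log_mgf_bernoulli(lambdas))

def kl_to_half(x):
    # Closed form for Bernoulli(1/2): I(x) = KL(x || 1/2)
    return x * np.log(2 * x) + (1 - x) * np.log(2 * (1 - x))

print(rate_function(0.75), kl_to_half(0.75))  # both ~0.1308
```

The rate function is zero at the mean and grows with deviation size, which is the "real-valued function" flavor of description the abstract contrasts with single-number smoothness measures.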
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Gradient Descent (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.45)
TEAM: We Need More Powerful Adversarial Examples for DNNs
Qian, Yaguan, Zhang, Ximin, Wang, Bin, Li, Wei, Gu, Zhaoquan, Wang, Haijiang, Swaileh, Wassim
Although deep neural networks (DNNs) have achieved success in many application fields, they are still vulnerable to imperceptible adversarial examples that can easily lead to misclassification. To overcome this challenge, many defensive methods have been proposed. Indeed, a powerful adversarial example is a key benchmark for measuring these defensive mechanisms. In this paper, we propose a novel method (TEAM, Taylor Expansion-Based Adversarial Methods) to generate more powerful adversarial examples than previous methods. The main idea is to craft adversarial examples by minimizing the confidence of the ground-truth class under untargeted attacks or maximizing the confidence of the target class under targeted attacks. Specifically, we define new objective functions that approximate DNNs by using the second-order Taylor expansion within a tiny neighborhood of the input. The Lagrangian multiplier method is then used to obtain the optimal perturbations for these objective functions. To decrease the amount of computation, we further introduce the Gauss-Newton (GN) method to speed it up. Finally, the experimental results show that our method can reliably produce adversarial examples with a 100% attack success rate (ASR) using only smaller perturbations. In addition, adversarial examples generated with our method can defeat defensive distillation based on gradient masking.
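The damped Gauss-Newton step the abstract describes can be sketched on a toy differentiable stand-in for a DNN. This is a hedged illustration of the general technique (second-order model plus a Lagrangian/damping term `mu`), not TEAM's actual objective: the toy model, scales, and constants are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = 0.3 * rng.normal(size=(3, 5))        # toy 3-class "network" weights

def model(x):
    # Toy differentiable stand-in for a DNN's class scores.
    return np.tanh(A @ x)

x0 = rng.normal(size=5)
t = int(np.argmax(model(x0)))            # ground-truth (top) class

def residual(x):
    # Untargeted attack: drive the ground-truth class score toward 0.
    return model(x)[t]

def jac(x):
    # Analytic Jacobian row of the residual w.r.t. the input.
    h = A @ x
    return (1 - np.tanh(h[t]) ** 2) * A[t]

mu = 0.1                                 # damping / Lagrangian multiplier
x = x0.copy()
for _ in range(25):
    r, j = residual(x), jac(x)
    # Gauss-Newton step with damping: (j^T j + mu I) delta = -j^T r.
    # The mu I term comes from the norm constraint keeping the
    # perturbation inside a tiny neighborhood of the input.
    delta = np.linalg.solve(np.outer(j, j) + mu * np.eye(5), -j * r)
    x = x + delta

print(abs(residual(x0)), abs(residual(x)))
```

The GN approximation replaces the full Hessian of the squared residual with `j^T j`, which is what makes the per-step cost a small linear solve instead of a second-order evaluation of the network.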
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Europe > France (0.04)
- Research Report > Promising Solution (0.48)
- Research Report > New Finding (0.34)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.95)
- Information Technology > Security & Privacy (0.93)